
    Debugging of Behavioural Models using Counterexample Analysis

    Model checking is an established technique for automatically verifying that a model satisfies a given temporal property. When the model violates the property, the model checker returns a counterexample, which is a sequence of actions leading to a state where the property is not satisfied. Understanding this counterexample in order to debug the specification is a complicated task for several reasons: (i) the counterexample can contain a large number of actions, (ii) the debugging task is mostly carried out manually, and (iii) the counterexample does not explicitly highlight the source of the bug hidden in the model. This article presents a new approach that improves the usability of model checking by simplifying the comprehension of counterexamples. To do so, we first extract all the counterexamples from the model. Second, we define an analysis algorithm that identifies the actions that make the model switch from incorrect to correct behaviours, making these actions relevant from a debugging perspective. Third, we develop a set of abstraction techniques to extract these actions from counterexamples. Our approach is fully automated by a tool we implemented, and it was applied to real-world case studies from various application areas for evaluation purposes.
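The core idea — finding the actions at which the model can still reach correct behaviour versus those that commit it to an erroneous part — can be sketched with a small backward-reachability pass over a labelled transition system. This is an illustrative reconstruction, not the paper's actual algorithm; the toy model, state names, and function names are all made up.

```python
from collections import defaultdict

def states_reaching(lts, targets):
    """Backward reachability: every state with at least one path into `targets`."""
    rev = defaultdict(set)
    for (src, action, dst) in lts:
        rev[dst].add(src)
    seen = set(targets)
    stack = list(targets)
    while stack:
        s = stack.pop()
        for p in rev[s]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def relevant_actions(lts, good_states):
    """Transitions leaving the 'can still be correct' region for a state
    that cannot reach a good state any more -- the choices worth showing
    to the user when debugging a counterexample."""
    ok = states_reaching(lts, good_states)
    return [(s, a, d) for (s, a, d) in lts if s in ok and d not in ok]

# Toy model: from s0, 'send' keeps correct behaviour reachable,
# while 'drop' commits the run to the erroneous part of the model.
lts = [("s0", "send", "s1"), ("s0", "drop", "err"),
       ("s1", "ack", "done"), ("err", "retry", "err")]
print(relevant_actions(lts, {"done"}))  # [('s0', 'drop', 'err')]
```

The interesting output is precisely the transition where a debugging-relevant choice is made, not the whole counterexample trace.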

    Visual Debugging of Behavioural Models

    In this paper, we present the CLEAR visualizer tool, which supports the debugging of behavioural models analysed using model checking techniques. The tool provides visualization techniques that simplify the comprehension of counterexamples by highlighting specific states in the model where a choice is possible between executing a correct behaviour and falling into an erroneous part of the model. Our tool was applied successfully to many case studies and allowed us to visually identify several kinds of typical bugs. Video URL: https://youtu.be/nJLOnRaPe1A

    Space/Time Noncommutativity in String Theories without Background Electric Field

    The appearance of space/time noncommutativity in theories of open strings with a constant non-diagonal background metric is considered. We show that, even if the space-time coordinates commute, when there is a metric with a time-space component, no electric field, and a Dirichlet boundary condition along the spatial direction, a Moyal phase still arises in products of vertex operators. The theory is in fact dual to the noncommutative open string (NCOS) theory. The correct definition of the vertex operators for this theory is provided. We also study the system in the presence of a B field. We consider the case in which the Dirichlet spatial direction is compactified and analyze the effect of these backgrounds on the closed string spectrum. We then heat up the system. We find that the Hagedorn temperature depends in a non-extensive way on the parameters of the background and is the same for the closed and open string sectors. Comment: 18 pages, JHEP style.
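As a reminder of what "a Moyal phase in products of vertex operators" means, the standard star product for a constant noncommutativity parameter is the following (textbook material, not this paper's specific conventions or normalizations):

```latex
% Moyal star product for constant \theta^{\mu\nu}:
(f \star g)(x) \;=\; f(x)\,
  \exp\!\Big(\tfrac{i}{2}\,\overleftarrow{\partial}_{\mu}\,
             \theta^{\mu\nu}\,\overrightarrow{\partial}_{\nu}\Big)\, g(x)
% On plane waves (and hence on vertex-operator factors e^{i k \cdot X})
% this produces the momentum-dependent phase:
e^{i k_1 \cdot x} \star e^{i k_2 \cdot x}
  \;=\; e^{-\tfrac{i}{2}\,\theta^{\mu\nu} k_{1\mu} k_{2\nu}}\,
        e^{i (k_1 + k_2) \cdot x}
% Space/time noncommutativity corresponds to \theta^{0 i} \neq 0.
```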

    Taking Arduino to the Internet of things: the ASIP programming model

    Micro-controllers such as Arduino are widely used by all kinds of makers worldwide. Their popularity has been driven by Arduino’s simplicity of use and the large number of sensors and libraries available to extend the basic capabilities of these controllers. The last decade has witnessed a surge of software engineering solutions for “the Internet of Things”, but in several cases these solutions require computational resources more advanced than simple, resource-limited micro-controllers. Surprisingly, in spite of being basic ingredients of complex hardware–software systems, there does not seem to be a simple and flexible way to (1) extend the basic capabilities of micro-controllers and (2) coordinate inter-connected micro-controllers in “the Internet of Things”. Indeed, new capabilities are added on a per-application basis, and interactions are mainly limited to bespoke, point-to-point protocols that target the hardware I/O rather than the services provided by this hardware. In this paper we present the Arduino Service Interface Programming (ASIP) model, which addresses the issues above by (1) providing a “Service” abstraction to easily add new capabilities to micro-controllers, and (2) providing support for networked boards using a range of strategies, including socket connections, bridging devices, MQTT-based publish–subscribe messaging, and discovery services. We provide an open-source implementation of the code running on Arduino boards and client libraries in Java, Python, Racket and Erlang. We show how ASIP enables the rapid development of non-trivial applications (coordination of input/output on distributed boards and implementation of a line-following algorithm for a remote robot), and we assess the performance of ASIP in several ways, both quantitatively and qualitatively.
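The "service" abstraction can be pictured as clients exchanging small text frames addressed to named services on a board, instead of poking raw pins. The sketch below is purely illustrative: the `service:verb:args` frame layout, the service names, and the helper functions are hypothetical and are NOT the actual ASIP wire format.

```python
# Hypothetical service-message framing in the spirit of ASIP's "Service"
# abstraction. The layout "service:verb:args" is invented for illustration.

def encode(service, verb, *args):
    """Build a text frame addressed to a board-side service."""
    return ":".join([service, verb, *map(str, args)]) + "\n"

def decode(frame):
    """Parse a frame back into (service, verb, args)."""
    service, verb, *args = frame.strip().split(":")
    return service, verb, args

# A client asks a (hypothetical) distance service for a reading, and the
# board would reply with an event frame carrying the measured value.
request = encode("distance", "get", 0)
print(request.strip())                   # distance:get:0
print(decode("distance:event:0:42\n"))   # ('distance', 'event', ['0', '42'])
```

Because frames name a service rather than a pin, the same client code can address a local serial board, a socket-bridged board, or an MQTT topic — which is the kind of transport flexibility the paper describes.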

    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that participants underestimate this type of uncertainty.
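The DGP/EGP distinction can be illustrated with a toy simulation: one shared dataset, several "teams" each making a different defensible analytic choice, and the across-team spread of estimates playing the role of the non-standard error. All numbers and the trimming rule below are made up for illustration; this is not the paper's design.

```python
# Toy illustration: standard error (sampling uncertainty) vs. the spread of
# estimates across teams analysing the SAME data differently (an NSE analogue).
import random
import statistics

random.seed(1)
sample = [random.gauss(10.0, 2.0) for _ in range(500)]  # one shared dataset

def team_estimate(data, trim_fraction):
    """Each team picks its own outlier-trimming rule (the EGP choice)."""
    k = int(len(data) * trim_fraction)
    ordered = sorted(data)
    kept = ordered[k: len(ordered) - k] if k else ordered
    return statistics.mean(kept)

# 11 teams, identical data, different defensible trimming choices (0%..10%).
choices = [i / 100 for i in range(11)]
estimates = [team_estimate(sample, c) for c in choices]

standard_error = statistics.stdev(sample) / len(sample) ** 0.5
non_standard_error = statistics.stdev(estimates)  # spread across teams
print(round(standard_error, 4), round(non_standard_error, 4))
```

Even with identical data, the estimates disagree — that dispersion exists on top of, and separately from, the usual sampling-based standard error.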

    Debugging of Behavioural Models using Counterexample Analysis

    Model checking is an established technique for automatically verifying that a model satisfies a given temporal property. When the model violates the property, the model checker returns a counterexample, which is a sequence of actions leading to a state where the property is not satisfied. Understanding this counterexample in order to debug the specification is a complicated task for several reasons: (i) the counterexample can contain a large number of actions; (ii) the debugging task is mostly carried out manually; (iii) the counterexample does not explicitly point out the source of the bug hidden in the model; (iv) the most relevant actions are not highlighted in the counterexample; (v) the counterexample does not give a global view of the problem. This work presents a new approach that improves the usability of model checking by simplifying the comprehension of counterexamples. Our solution aims at keeping in counterexamples only the actions that are relevant for debugging purposes. This is achieved by detecting in the models specific choices between transitions leading to a correct behaviour and transitions falling into an erroneous part of the model. These choices, which we call "neighbourhoods", turn out to be of major importance for understanding the bug behind the counterexample. To extract such choices we propose two different methods. The first method supports the debugging of counterexamples for safety property violations. To do so, it builds from the original model a new model containing all the counterexamples, and then compares the two models to identify neighbourhoods. The second method supports the debugging of counterexamples for liveness property violations. Given a liveness property, it extends the model with prefix/suffix information with respect to that property. This enriched model is then analysed to identify neighbourhoods. A model annotated with neighbourhoods can be exploited in two ways. First, the erroneous part of the model can be visualized with a specific focus on neighbourhoods, in order to obtain a global view of the bug behaviour. Second, a set of abstraction techniques we developed can be used to extract the relevant actions from counterexamples, which makes them easier to understand. Our approach is fully automated by a tool we implemented, which has been validated on real-world case studies from various application areas.
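The safety-property method — build a model containing only counterexample behaviour, then compare it with the full model — can be sketched as a comparison of outgoing transitions per state. This is a rough illustrative reconstruction, not the thesis's algorithm; the toy transition systems and function names are invented.

```python
# Sketch: states kept in the counterexample model that have lost some correct
# outgoing transition are the "neighbourhoods" -- points where a choice exists
# between a correct continuation and an erroneous one.

def outgoing(lts):
    """Map each state to its set of outgoing (action, destination) pairs."""
    out = {}
    for (src, action, dst) in lts:
        out.setdefault(src, set()).add((action, dst))
    return out

def neighbourhoods(full_model, counterexample_model):
    full = outgoing(full_model)
    bad = outgoing(counterexample_model)
    # A state is a neighbourhood if the full model offers it a transition
    # that the counterexample-only model does not.
    return {s for s in bad if full.get(s, set()) - bad[s]}

full_model = [("s0", "a", "s1"), ("s0", "b", "err"), ("s1", "c", "ok")]
cex_model  = [("s0", "b", "err")]                 # all counterexamples
print(neighbourhoods(full_model, cex_model))      # {'s0'}
```

In the toy example, s0 is flagged because the full model still offers it the correct transition "a", while the counterexample model only retains the erroneous choice "b".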

    Debugging of Behavioural Models with CLEAR

    This paper presents a tool for debugging behavioural models analysed using model checking techniques. It consists of three parts: (i) one for annotating a behavioural model given a temporal formula, (ii) one for visualizing the erroneous part of the model with a specific focus on the decision points that make the model correct or incorrect, and (iii) one for abstracting counterexamples, thus providing an explanation of the source of the bug.